Getting Started with PyTorch: Why Tensors Matter
EvoClass-AI002, Lecture 1


PyTorch is a highly flexible, dynamic open-source framework, widely favored for deep learning research and rapid prototyping. At its core is the tensor, an indispensable data structure: a multi-dimensional array designed for the efficient numerical computation that deep learning models require, with built-in support for GPU acceleration.

1. Understanding Tensor Structure

In PyTorch, every input, output, and model parameter is wrapped in a tensor. Tensors serve the same role as NumPy arrays, but they are optimized for specialized hardware such as GPUs, making them far more efficient for the large linear-algebra operations that neural networks require.

The key attributes that define a tensor are listed below; a short inspection example follows the list:

  • Shape: defines the dimensions of the data, expressed as a tuple (e.g., $4 \times 32 \times 32$ for a batch of four $32 \times 32$ images).
  • Data type (dtype): specifies the numeric type of the stored elements (e.g., torch.float32 for model weights, torch.int64 for indices).
  • Device: indicates the physical hardware the tensor lives on, typically 'cpu' or 'cuda' (an NVIDIA GPU).
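
A minimal sketch of inspecting these attributes (assuming a standard PyTorch install; the CUDA branch runs only when a GPU build is actually available):

import torch

# A batch-like tensor of zeros: batch size 4, 32 x 32 each
x = torch.zeros(4, 32, 32)

print(x.shape)   # torch.Size([4, 32, 32])
print(x.dtype)   # torch.float32 (the default floating-point dtype)
print(x.device)  # cpu

# Move the tensor to the GPU only when one is available
if torch.cuda.is_available():
    x = x.to('cuda')
    print(x.device)  # cuda:0
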
2. Dynamic Graphs and Automatic Differentiation (Autograd)

PyTorch uses an imperative (define-by-run) execution model: the computation graph is built step by step as operations execute. This lets the built-in automatic differentiation engine, Autograd, track every operation performed on a tensor whose requires_grad=True attribute is set, so gradients can be computed easily during backpropagation.
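
A minimal Autograd sketch (the values are arbitrary, chosen so the gradient is easy to verify by hand):

import torch

# requires_grad=True tells Autograd to record every operation on w
w = torch.tensor([2.0, 3.0], requires_grad=True)

# The graph is built imperatively as this line executes: y = sum(w ** 2)
y = (w ** 2).sum()

# Backpropagation: computes dy/dw and stores it in w.grad
y.backward()

print(w.grad)  # tensor([4., 6.]), i.e. 2 * w
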
Question 1
Which command creates a $5 \times 5$ tensor containing random numbers following a uniform distribution between 0 and 1?
  • torch.rand(5, 5)
  • torch.random(5, 5)
  • torch.uniform(5, 5)
  • torch.randn(5, 5)

Question 2
If tensor $A$ is on the CPU, and tensor $B$ is on the CUDA device, what happens if you try to compute $A + B$?
  • An error occurs because operations require tensors on the same device.
  • PyTorch automatically moves $A$ to the CUDA device and proceeds.
  • The operation is performed on the CPU, and the result is returned to the CPU.

Question 3
What is the most common data type (dtype) used for model weights and intermediate calculations in Deep Learning?
  • torch.float32 (single-precision floating point)
  • torch.int64 (long integer)
  • torch.bool
  • torch.float64 (double-precision floating point)

Challenge: Tensor Manipulation and Shape
Prepare a tensor for a specific matrix operation.
You have a feature vector $F$ of shape $(10,)$. You need to multiply it by a weight matrix $W$ of shape $(10, 5)$. For matrix multiplication (MatMul) to work, $F$ must be 2-dimensional.
Step 1
What should the shape of $F$ be before multiplication with $W$?
Solution:
The inner dimensions must match, so $F$ must be reshaped to $(1, 10)$. Then $(1, 10) @ (10, 5) \rightarrow (1, 5)$.
Code: F_new = F.unsqueeze(0) or F_new = F.view(1, -1)
Step 2
Perform the matrix multiplication between $F_{new}$ and $W$ (shape $(10, 5)$).
Solution:
The operation is a straightforward MatMul.
Code: output = F_new @ W or output = torch.matmul(F_new, W)
Step 3
Which method explicitly returns a tensor with the specified dimensions, allowing you to flatten a tensor to $(50,)$? (For this step, assume $F$ has shape $(5, 10)$.)
Solution:
Use the view or reshape methods. The fastest way to flatten is often to pass -1 for one dimension and let PyTorch infer its size.
Code: F_flat = F.view(-1) or F_flat = F.reshape(50)
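
Putting the three steps together in one runnable sketch (F, W, F_new, and output follow the challenge's naming; F2 is introduced here only to illustrate the Step 3 variant):

import torch

F = torch.rand(10)      # feature vector, shape (10,)
W = torch.rand(10, 5)   # weight matrix, shape (10, 5)

# Step 1: add a leading dimension so F becomes 2-dimensional -> (1, 10)
F_new = F.unsqueeze(0)

# Step 2: matrix multiplication: (1, 10) @ (10, 5) -> (1, 5)
output = F_new @ W
print(output.shape)     # torch.Size([1, 5])

# Step 3: flatten a (5, 10) tensor back to (50,)
F2 = torch.rand(5, 10)
F_flat = F2.view(-1)    # equivalently: F2.reshape(50)
print(F_flat.shape)     # torch.Size([50])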